Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to model biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from engaging in unfair decision-making, the AI community has concentrated efforts on correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
We present a new approach for building fuzzy clusters of large data sets using smoothing numerical techniques. The usual clustering criteria are relaxed, so the search for a good fuzzy partition is carried out over a continuous space rather than the combinatorial space of classical methods \cite{hartigan}. The smoothing transforms a strongly non-differentiable problem into differentiable subproblems of optimization through the use of an infinite class of differentiable functions. To implement the algorithm, we used the statistical software $R$, and the results obtained were compared with the traditional fuzzy $c$-means method proposed by Bezdek.
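For reference, the baseline the abstract compares against is Bezdek's fuzzy $c$-means, which alternates membership and centroid updates to minimize $\sum_i \sum_j u_{ij}^m \lVert x_i - c_j \rVert^2$ subject to each membership row summing to one. A minimal NumPy sketch of that baseline (not the smoothing method proposed in the paper; all names and defaults here are illustrative choices):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Bezdek's fuzzy c-means: alternate centroid and membership updates.

    X: (n, d) data matrix; c: number of clusters; m > 1: fuzzifier.
    Returns (centers, U) with U the (n, c) membership matrix, rows summing to 1.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix, normalized so each row sums to 1.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Centroid update: membership-weighted mean of the points.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Euclidean distances from each point to each centroid.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)  # guard against division by zero
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
        ratios = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        U_new = 1.0 / ratios.sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

On well-separated data the soft memberships become nearly crisp, so taking `U.argmax(axis=1)` recovers the hard partition that combinatorial methods search for directly.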